Deceptive Logic Locking for Hardware Integrity Protection Against Machine Learning Attacks

Authors

Abstract

Logic locking has emerged as a prominent key-driven technique to protect the integrity of integrated circuits. However, novel machine-learning-based attacks have recently been introduced that challenge the security foundations of locking schemes. These attacks are able to recover a significant percentage of the key without having access to an activated circuit. This article addresses this issue through two focal points. First, we present a theoretical model to test locking schemes for key-related structural leakage that can be exploited by machine learning. Second, based on this model, we introduce D-MUX: a deceptive multiplexer-based logic-locking scheme that is resilient against structure-exploiting machine learning attacks. Through the design of D-MUX, we uncover a major fallacy in existing multiplexer-based locking schemes in the form of a structural-analysis attack. Finally, an extensive cost evaluation of D-MUX is presented. To the best of our knowledge, D-MUX is the first machine-learning-resilient locking scheme capable of protecting against all known learning-based attacks. Hereby, the presented work offers a starting point for the design and evaluation of future-generation logic locking in the era of machine learning.
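To illustrate the core idea behind multiplexer-based logic locking, the following is a minimal sketch of key-controlled MUX insertion: a gate's true output wire and a decoy wire feed a 2-to-1 multiplexer whose select line is a secret key bit. This is an illustrative toy model, not the D-MUX algorithm itself; the function names, the choice of an AND gate, and the correct key value of 0 are assumptions for the example.

```python
def mux(select: int, in0: int, in1: int) -> int:
    """2-to-1 multiplexer: returns in0 when select is 0, else in1."""
    return in1 if select else in0


def locked_and(a: int, b: int, decoy: int, key_bit: int) -> int:
    """An AND gate whose output is routed through a key-controlled MUX.

    Only the correct key bit (0 in this toy example) selects the true
    wire; a wrong key selects the decoy wire, corrupting the output.
    """
    true_wire = a & b
    return mux(key_bit, true_wire, decoy)


# With the correct key, the locked gate matches a plain AND for all inputs.
assert all(
    locked_and(a, b, decoy, key_bit=0) == (a & b)
    for a in (0, 1) for b in (0, 1) for decoy in (0, 1)
)
```

The security intuition is that an attacker inspecting the netlist sees two candidate wires per MUX and cannot tell, from structure alone, which one is genuine; the paper's contribution is making that structural ambiguity hold up against machine-learning classifiers.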


Similar articles

Hardware Attacks Against Smart Cards

In this report we discuss several hardware attacks against smart cards. We describe a number of successful attacks against cards such as pay-TV cards and pre-paid phone cards, and how these attacks have since been mitigated. We have also looked into a class of attacks called Optical Fault Induction Attacks and described how they work and what a designer can do to prevent them.


Protection Mechanisms Against Phishing Attacks

Approaches against phishing can be classified into modifications of the traditional PIN/TAN authentication on the one hand, and approaches that try to reduce the probability of a scammer being successful without changing the existing PIN/TAN method on the other hand. We present a new approach based on challenge-response authentication. Since our proposal does not require any new hardware on the...


Noise-Tolerant Machine Learning Attacks against Physically Unclonable Functions

Along with the evolution of Physically Unclonable Functions (PUFs) numerous successful attacks against PUFs have been proposed in the literature. Among these are machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, that have attracted great attention. Nevertheless, the issue of dealing with noise has so far not been addressed in this context. Thus, it is not ...


Evasion Attacks against Machine Learning at Test Time

In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to syste...



Journal

Journal title: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Year: 2022

ISSN: 1937-4151, 0278-0070

DOI: https://doi.org/10.1109/tcad.2021.3100275